Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement, in stages, the functionality required to successfully complete this project. If additional code is needed that cannot be included in the notebook, be sure the Python code is successfully imported and included in your submission. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation. Note that some sections of the implementation are optional and will be marked with 'Optional' in the header.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.

Python imports

In [2]:
import sys
import os
import hashlib
import pickle
import time
from urllib.request import urlretrieve
import urllib
import random
from zipfile import ZipFile
from tqdm import tqdm

import imutils


import tensorflow as tf
from tensorflow.contrib.layers import flatten

import numpy as np
import pandas as pd

import skimage.data
import skimage.transform
from   skimage.transform import resize

import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.image as mpimg
import matplotlib.gridspec as gridspec


import math
import cv2
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from sklearn.utils import shuffle

import prettytensor as pt
from PIL import Image
from datetime import timedelta

%matplotlib inline
print('All modules imported.')
All modules imported.

Step 0: Load The Data

In [3]:
# Load pickled data
#import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = "/home/octo/Desktop/traffic-signs-data/train.p"
testing_file =  "/home/octo/Desktop/traffic-signs-data/test.p"

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2), representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES (see the rescaling sketch below).
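Because the coordinates assume the original image size while the pickled images are resized to 32x32, a bounding box must be rescaled before it can be drawn. A minimal sketch, assuming 'sizes' holds the original (width, height) parallel to 'coords' (the helper name scale_coords is hypothetical):

def scale_coords(coords, orig_size, new_size=(32, 32)):
    # Rescale a bounding box (x1, y1, x2, y2) from the original
    # image size (width, height) to the resized image.
    x1, y1, x2, y2 = coords
    sx = new_size[0] / orig_size[0]
    sy = new_size[1] / orig_size[1]
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)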

Complete the basic data summary below.

In [5]:
### Replace each question mark with the appropriate value.

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of testing examples.
n_test = len(X_test)

# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape

# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(train['labels']))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 39209
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

Extra Summary & Exploration

In [6]:
n_test = len(X_test)
print("Number of testing examples =", n_test)
f15 = train['features'][15]
image_shape = train['features'][15].shape
print("Image data shape =", image_shape)
n_classes = len(np.unique(train['labels']))
print("Number of classes =", n_classes)
fig,ax = plt.subplots(1)
ax.imshow(f15);
plt.figure()
plt.imshow(f15[:, :, 0])
plt.figure()
plt.imshow(f15[:, :, 1])
plt.figure()
plt.imshow(f15[:, :, 2])
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
Out[6]:
<matplotlib.image.AxesImage at 0x7f741d0f5240>
In [7]:
## Useful methods for plotting 
# Crop an image to a square using its shorter side
def crop_image(img):
    """Make any image a square image.
    Parameters
    ----------
    img : np.ndarray
        Input image to crop, assumed at least 2d.
    Returns
    -------
    crop : np.ndarray
        Cropped image.
    """
    size = np.min(img.shape[:2])
    extra = img.shape[:2] - size
    crop = img
    for i in np.flatnonzero(extra):
        crop = np.take(crop, extra[i] // 2 + np.r_[:size], axis=i)
    return crop
## Compositing
def composite(images, saveto='com_montage.png'):
    """Build a montage of images in a grid, separated by 1-pixel borders.
    Returns the montage as a numpy.ndarray."""

    # Crop every image to a square
    images = [crop_image(img_i) for img_i in images]

    # Then resize the square image to 100 x 100 pixels
    images = [resize(img_i, (100, 100)) for img_i in images]

    # Finally make our list of 3-D images a 4-D array with the first dimension the number of images:
    images = np.array(images).astype(np.float32)
    
    img_h = images.shape[1]
    img_w = images.shape[2]
    n_plots = int(np.ceil(np.sqrt(images.shape[0])))
    if len(images.shape) == 4 and images.shape[3] == 3:
        m = np.ones(
            (images.shape[1] * n_plots + n_plots + 1,
             images.shape[2] * n_plots + n_plots + 1, 3)) * 0.5
    else:
        m = np.ones(
            (images.shape[1] * n_plots + n_plots + 1,
             images.shape[2] * n_plots + n_plots + 1)) * 0.5
    for i in range(n_plots):
        for j in range(n_plots):
            this_filter = i * n_plots + j
            if this_filter < images.shape[0]:
                this_img = images[this_filter]
                m[1 + i + i * img_h:1 + i + (i + 1) * img_h,
                  1 + j + j * img_w:1 + j + (j + 1) * img_w] = this_img
    plt.imsave(arr=m, fname=saveto)
    return m

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [8]:
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline

# Display 64 random images

# Using the coords array, draw a bounding box around the sign in image 15.
# NOTE: coords assume the ORIGINAL image size, so the box will not line up
# exactly on the resized 32x32 image (see the rescaling sketch earlier in this step).
x1, y1, x2, y2 = train['coords'][15]
fig, ax = plt.subplots(1)
ax.imshow(train['features'][15])
# Create a rectangular patch for the bounding box
rect = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=1, edgecolor='r', facecolor='none')
ax.add_patch(rect)
plt.show()
indices = np.random.permutation(X_train.shape[0])
images_idx = indices[:64]
imgs = X_train[images_idx,:]
plt.figure(figsize=(10, 10))
plt.imshow(composite(imgs));     
In [9]:
# Display images with labels
def img_labels(images, labels, save_fname):
    unique_labels = set(labels)
    fig = plt.figure(figsize=(20, 20))
    i = 1
    for label in unique_labels:
        # Pick the first image for each label.
        image = images[labels.index(label)]
        plt.subplot(8, 8, i)  # A grid of 8 rows x 8 columns
        plt.axis('off')
        plt.title("Label {0} ({1})".format(label, labels.count(label)))
        i += 1
        _ = plt.imshow(image)
    plt.show()
    
    # Now we can save it to a numpy array and save the image to a file
    fig.canvas.draw()
    data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    plt.imsave(arr=data, fname=save_fname)
In [10]:
# Unique labels
images = X_train
labels = train['labels'].tolist()

print("\nNo of unique Labels: {0}\nTotal Images: {1}".format(len(set(labels)), len(images)))

img_labels(images, labels, "unq_labels.png")
No of unique Labels: 43
Total Images: 39209
In [11]:
# Images in a class
def img_class(images, label, save_fname):
    limit = 48
    fig = plt.figure(figsize=(15, 5))
    i = 1

    # Assumes labels are sorted by class, so each class forms one contiguous block.
    # NOTE: this indexing fails for the last class, since label+1 does not exist.
    start = labels.index(label)
    end = start + labels[start:].index(label + 1)
    for image in images[start:end][:limit]:
        plt.subplot(4, 12, i)  # A grid of 4 rows x 12 columns
        plt.axis('off')
        i += 1
        plt.imshow(image)
    plt.show()   
    
    # Now we can save it to a numpy array and save the image to a file
    fig.canvas.draw()
    data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    plt.imsave(arr=data, fname=save_fname)
In [12]:
# Variations within a class
# Let's look at class 2
img_class(images, 2, "class_2.png" )
# and class 13
img_class(images,13, "class_13.png" )

Above: repeated examples of the same sign, varying in (1) scale, (2) lighting, and (3) viewing angle.

In [13]:
signnames = pd.read_csv('/media/octo/4E74A17774A16287/SELF DRIVING CAR/P2-deep learning/signnames.csv')
data_i = [[i,sum(y_train == i)] for i in range(len(np.unique(y_train)))]
data_i_sorted = sorted(data_i, key=lambda x: x[1])
signnames['Occurrence'] = pd.Series(np.asarray(data_i_sorted).T[1], index=np.asarray(data_i_sorted).T[0])
signnames_sorted = signnames.sort_values(['Occurrence'], ascending=[0]).reset_index()
signnames_sorted = signnames_sorted.drop('index', 1)
In [14]:
signnames_sorted
Out[14]:
ClassId SignName Occurrence
0 2 Speed limit (50km/h) 2250
1 1 Speed limit (30km/h) 2220
2 13 Yield 2160
3 12 Priority road 2100
4 38 Keep right 2070
5 10 No passing for vehicles over 3.5 metric tons 2010
6 4 Speed limit (70km/h) 1980
7 5 Speed limit (80km/h) 1860
8 25 Road work 1500
9 9 No passing 1470
10 7 Speed limit (100km/h) 1440
11 3 Speed limit (60km/h) 1410
12 8 Speed limit (120km/h) 1410
13 11 Right-of-way at the next intersection 1320
14 35 Ahead only 1200
15 18 General caution 1200
16 17 No entry 1110
17 31 Wild animals crossing 780
18 14 Stop 780
19 33 Turn right ahead 689
20 15 No vehicles 630
21 26 Traffic signals 600
22 28 Children crossing 540
23 23 Slippery road 510
24 30 Beware of ice/snow 450
25 16 Vehicles over 3.5 metric tons prohibited 420
26 34 Turn left ahead 420
27 6 End of speed limit (80km/h) 420
28 36 Go straight or right 390
29 22 Bumpy road 390
30 40 Roundabout mandatory 360
31 20 Dangerous curve to the right 360
32 21 Double curve 330
33 39 Keep left 300
34 29 Bicycles crossing 270
35 24 Road narrows on the right 270
36 41 End of no passing 240
37 42 End of no passing by vehicles over 3.5 metric ... 240
38 32 End of all speed and passing limits 240
39 27 Pedestrians 240
40 37 Go straight or left 210
41 19 Dangerous curve to the left 210
42 0 Speed limit (20km/h) 210
In [15]:
def sign_class_name(class_id):
    return signnames[signnames['ClassId'] == class_id]['SignName'].iloc[0]
In [16]:
# Show one of the images
i = 1231
image = X_train[i]
plt.title(sign_class_name(y_train[i]))
plt.imshow(image)
Out[16]:
<matplotlib.image.AxesImage at 0x7f7417a82e10>
In [20]:
plt.figure(figsize=(4,3))
plt.bar(range(43), height=signnames_sorted["Occurrence"])
Out[20]:
<Container object of 43 artists>
In [21]:
# Histogram of class counts: the frequency deviates strongly across classes
l = labels
x = list(set(l))
y = [l.count(i) for i in x]
width = 0.1
plt.figure(figsize=(6,3))
plt.xlabel('Labels')
plt.xticks(x, x, fontsize = 8)
plt.ylabel('Frequency')
plt.axis([min(x)-1,max(x)+1,min(y)-50,max(y)+100])
plt.grid(True)
plt.bar(x, y, width, color='g')
plt.show()
In [22]:
# Same check for the test set:
# the class frequencies are similarly uneven there.
l = test['labels'].tolist()
x = list(set(l))
y = [l.count(i) for i in x]
width = 0.1
plt.figure(figsize=(6,3))
plt.xlabel('Labels')
plt.xticks(x, x, fontsize = 8)
plt.ylabel('Frequency')
plt.title('test set')
plt.axis([min(x)-1,max(x)+1,min(y)-50,max(y)+100])
plt.grid(True)

plt.bar(x, y, width, color='g')

plt.show()
In [30]:
# Distribution of pixel values across all flattened images.
# A few pixels reach up to 255, but most values cluster below ~75, so the data should be normalized.

images = X_train
flattened = images.ravel()
print(flattened[:20])
plt.hist(flattened, 255);
[ 75  78  80  74  76  78  83  84  83 101  92  85 130 107 102 153 113 114
 173 114]
In [24]:
#Compute mean and std deviation of all images and plot
mean_img = np.mean(images, axis=0)
std_img = np.std(images, axis=0)
In [26]:
bins=100
fig,axs = plt.subplots(1,3, figsize=(12,8), sharey=True, sharex=True)
axs[0].hist((images[5]).ravel(), bins)
axs[0].set_title('img distribution')

axs[1].hist((mean_img).ravel(), bins)
axs[1].set_title('mean distribution')

axs[2].hist((std_img).ravel(), bins)
axs[2].set_title('std deviation distribution')
Out[26]:
<matplotlib.text.Text at 0x7f741802eba8>
In [27]:
# Effect of normalizing: the data will be centered around 0.
# This suggests normalizing the data as a preprocessing step before training.
fig,axs = plt.subplots(1,2, figsize=(12,6), sharey=True, sharex=True)
axs[0].hist((images[5] - mean_img).ravel(), bins)
axs[0].set_title('img - mean distribution')

axs[1].hist(((images[5] - mean_img)/std_img).ravel(), bins)
axs[1].set_title('(img - mean)/std distribution')
Out[27]:
<matplotlib.text.Text at 0x7f741839bc18>
In [32]:
#plt.imshow(X_train[12])
# The figures above suggest that geometric transformations (translation, rotation,
# scaling) are useful preprocessing/augmentation steps before training.

slightly_translated = imutils.translate(X_train[12], 3, 0)
#plt.imshow(slightly_translated)
slightly_rotated = imutils.rotate(X_train[12], angle=20)
#plt.imshow(slightly_rotated)
slightly_scaled = imutils.resize(X_train[12], width=28)
slightly_scaled = imutils.resize(slightly_scaled, width=32)
#plt.imshow(slightly_scaled)

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

In [ ]:
### Preprocess the data here.
### Feel free to use as many code cells as needed.
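The preprocessing cell is left empty here; based on the mean/std analysis in the exploration step, normalization could be applied with a small helper like the following (a minimal sketch, not used in the training below; normalize is a hypothetical name):

def normalize(images, mean_img, std_img):
    # Zero-center each pixel and scale by the per-pixel standard deviation
    return (images.astype(np.float32) - mean_img) / (std_img + 1e-8)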

Question 1

Describe how you preprocessed the data. Why did you choose that technique?

Answer:

I mainly rotated, translated and rescaled the data for training. Histogram equalization might also be useful but was not used here. I did not generate extra data; instead I split part of the training set off as a validation set.
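For reference, histogram equalization could be added with OpenCV (already imported as cv2); a minimal sketch on the luminance channel, not used in this project (equalize_histogram is a hypothetical helper):

def equalize_histogram(img):
    # Equalize the luminance (Y) channel of an RGB uint8 image
    yuv = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
    yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)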

In [68]:
### Generate additional data (OPTIONAL!)
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.
In [59]:
# Translation: checking the effect before using it for augmentation

plt.imshow(X_train[5])
plt.imshow(imutils.translate(X_train[15], 0, 0))  # try e.g. (X_train[15], 0, 2)
Out[59]:
<matplotlib.image.AxesImage at 0x7f741744b358>
In [60]:
# Rotation: checking the effect before using it for augmentation

plt.imshow(imutils.rotate(X_train[89], angle=0))  # try e.g. angle=5
Out[60]:
<matplotlib.image.AxesImage at 0x7f74173bf208>
In [66]:
slightly_scaled = imutils.resize(imutils.resize(X_train[42], width=28), width=32)
plt.imshow(slightly_scaled)
Out[66]:
<matplotlib.image.AxesImage at 0x7f741712e3c8>
In [67]:
# Based on the cases above, these helpers are defined for later use.
# Transformations for generating additional data: scaling, translation, rotation.
def scaling(X_train, y_train, width=29):
    X_train_scaled = np.zeros_like(X_train)
    for i in range(len(X_train)):
        temp = imutils.resize(X_train[i], width)
        X_train_scaled[i] = imutils.resize(temp, 32)
    return X_train_scaled, y_train

def translation(X_train, y_train, dx=3, dy=0):
    X_train_translated = np.zeros_like(X_train)
    for i in range(len(X_train)):
        X_train_translated[i] = imutils.translate(X_train[i], dx, dy)
    return X_train_translated, y_train

def rotation(X_train, y_train, rotation_angle=5):
    X_train_rotated = np.zeros_like(X_train)
    for i in range(len(X_train)):
        X_train_rotated[i] = imutils.rotate(X_train[i], rotation_angle)
    return X_train_rotated, y_train
In [69]:
# Split: hold out 25% of the training data as a validation set

X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.25, random_state=42)

Question 2

Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?

Answer:

  1. The validation set was created by splitting off 25% of the training set; the testing set was already loaded at the start (a stratified variant of this split is sketched below).
  2. Three augmentation functions, scaling(), translation() and rotation(), were created above.
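Because the class frequencies are uneven (see the histograms above), a stratified split would preserve per-class proportions in the validation set; a minimal sketch using scikit-learn's stratify parameter (the variable names are hypothetical, and this is not what was run above):

X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=42,
    stratify=y_train)  # keep class proportions equal in both splits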
In [ ]:
### Define your architecture here.
### Feel free to use as many code cells as needed.
In [71]:
### architecture

def tf_TSC_model(x, drop_probablity):    
    # Hyperparameters
    mu = 0
    sigma = 0.1
    
    # Convolutional 32x32x3 -> 32x32x6
    c0_W = tf.Variable(tf.truncated_normal(shape=(1,1,3,6), mean = mu, stddev = sigma))
    c0_b = tf.Variable(tf.zeros(6))
    c0   = tf.nn.conv2d(x, c0_W, strides=[1, 1, 1, 1], padding='VALID') + c0_b

    # Relu Activation.
    c0 = tf.nn.relu(c0)
    print('Conv0 {}'.format(c0.get_shape()))
    
    # Convolutional 32x32x6 -> 28x28x6.
    c1_W = tf.Variable(tf.truncated_normal(shape=(5,5,6,6), mean = mu, stddev = sigma))
    c1_b = tf.Variable(tf.zeros(6))
    c1   = tf.nn.conv2d(c0, c1_W, strides=[1, 1, 1, 1], padding='VALID') + c1_b

    # Relu Activation.
    c1 = tf.nn.relu(c1)
        
    # Pooling. Input = 28x28x6. Output = 14x14x6.
    c1 = tf.nn.max_pool(c1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    print('Conv1 {}'.format(c1.get_shape()))
    c1 = tf.nn.dropout(c1,drop_probablity)
    
    # Convolutional 14x14x6 -> 10x10x12
    c2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6,12), mean = mu, stddev = sigma))
    c2_b = tf.Variable(tf.zeros(12))
    c2   = tf.nn.conv2d(c1, c2_W, strides=[1, 1, 1, 1], padding='VALID') + c2_b
    
    # Relu Activation.
    c2 = tf.nn.relu(c2)
    print('Conv2 {}'.format(c2.get_shape()))
    
    c2 = tf.nn.dropout(c2,drop_probablity)

    
    # Convolutional 10x10x12 -> 8x8x24
    c3_W = tf.Variable(tf.truncated_normal(shape=(3,3,12,24), mean = mu, stddev = sigma))
    c3_b = tf.Variable(tf.zeros(24))
    c3   = tf.nn.conv2d(c2, c3_W, strides=[1, 1, 1, 1], padding='VALID') + c3_b
    
    # Relu Activation.
    c3 = tf.nn.relu(c3)
    print('Conv3 {}'.format(c3.get_shape()))
    c3 = tf.nn.dropout(c3,drop_probablity)
    
    # Convolutional 8x8x24 -> 6x6x48
    c4_W = tf.Variable(tf.truncated_normal(shape=(3,3,24,48), mean = mu, stddev = sigma))
    c4_b = tf.Variable(tf.zeros(48))
    c4 = tf.nn.conv2d(c3, c4_W, strides=[1, 1, 1, 1], padding='VALID') + c4_b
    
    # Relu Activation
    c4 = tf.nn.relu(c4)
    print('Conv4 {}'.format(c4.get_shape()))
    
    # Convolutional 6x6x48 -> 4x4x96
    c5_W = tf.Variable(tf.truncated_normal(shape=(3,3,48,96), mean = mu, stddev = sigma))
    c5_b = tf.Variable(tf.zeros(96))
    c5 = tf.nn.conv2d(c4, c5_W, strides=[1, 1, 1, 1], padding='VALID') + c5_b
    
    # Relu Activation
    c5 = tf.nn.relu(c5)
    print('Conv5 {}'.format(c5.get_shape()))
    
    # Convolutional 4x4x96 -> 2x2x182
    c6_W = tf.Variable(tf.truncated_normal(shape=(3,3,96,182), mean = mu, stddev = sigma))
    c6_b = tf.Variable(tf.zeros(182))
    c6 = tf.nn.conv2d(c5, c6_W, strides=[1, 1, 1, 1], padding='VALID') + c6_b
    
    # Relu Activation
    c6 = tf.nn.relu(c6)
    print('Conv6 {}'.format(c6.get_shape()))
    
    # Convolutional 2x2x182 -> 1x1x182
    c7_W = tf.Variable(tf.truncated_normal(shape=(2,2,182,182), mean = mu, stddev = sigma))
    c7_b = tf.Variable(tf.zeros(182))
    c7 = tf.nn.conv2d(c6, c7_W, strides=[1, 1, 1, 1], padding='VALID') + c7_b
    
    # Relu Activation
    c7 = tf.nn.relu(c7)

    print('Conv7 {}'.format(c7.get_shape()))

    # Flatten 1x1x182 ->182.
    f0   = flatten(c7)
    print('Fully connected 0: {}'.format(f0.get_shape()))
    
    # Fully connected 182 -> 150.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(182,150), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(150))
    fc1   = tf.matmul(f0, fc1_W) + fc1_b
    
    # Relu Activation.
    fc1    = tf.nn.relu(fc1)
    
    print('Fully connected 1: {}'.format(fc1.get_shape()))

    # Fully connected 150 ->118.
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(150,118), mean = mu, stddev = sigma))
    fc2_b  = tf.Variable(tf.zeros(118))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b
    
    # Relu Activation.
    fc2    = tf.nn.relu(fc2)
    
    print('Fully connected 2: {}'.format(fc2.get_shape()))

    # Fully Connected 118 ->54.
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(118,54), mean = mu, stddev = sigma))
    fc3_b  = tf.Variable(tf.zeros(54))
    output = tf.matmul(fc2, fc3_W) + fc3_b
    
    print('Fully connected 3: {}'.format(output.get_shape()))
    
    return output

Question 3

What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.

Answer:

I used filter shapes of (1,1,3,6), then (5,5,6,6), (3,3,12,24) and so on, progressively increasing the depth so that low-level features are captured first. Dropout with a keep probability of 0.8 is applied after the max pooling and after the early convolutional layers, and a bias vector is added in each layer.

I started with the LeNet example and tried different batch sizes and epoch counts, but I found better results with additional convolutional layers plus dropout and max pooling at the beginning.

Dropout keep probability is 0.8 in all cases.
0 Convolutional layer: 32x32x3 -> 32x32x6 with ReLU activation.
1 Convolutional layer: 32x32x6 -> 28x28x6 with ReLU activation,
                    followed by pooling (28x28x6 -> 14x14x6) and dropout.
2 Convolutional layer: 14x14x6 -> 10x10x12 with ReLU activation, followed by dropout.
3 Convolutional layer: 10x10x12 -> 8x8x24 with ReLU activation, followed by dropout.
4 Convolutional layer: 8x8x24 -> 6x6x48 with ReLU activation.
5 Convolutional layer: 6x6x48 -> 4x4x96 with ReLU activation.
6 Convolutional layer: 4x4x96 -> 2x2x182 with ReLU activation.
7 Convolutional layer: 2x2x182 -> 1x1x182 with ReLU activation.
Flatten: 1x1x182 -> 182.
Fully connected layer: 182 -> 150 with ReLU activation.
Fully connected layer: 150 -> 118 with ReLU activation.
Fully connected layer: 118 -> 54 (linear output logits; only 43 of the 54 classes are actually used).
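A quick back-of-the-envelope parameter count for the layers listed above (a sketch; the shapes are taken directly from the model code):

convs = [(1, 1, 3, 6), (5, 5, 6, 6), (5, 5, 6, 12), (3, 3, 12, 24),
         (3, 3, 24, 48), (3, 3, 48, 96), (3, 3, 96, 182), (2, 2, 182, 182)]
fcs = [(182, 150), (150, 118), (118, 54)]
# Weights plus one bias per output channel / unit
n_params = sum(h * w * cin * cout + cout for h, w, cin, cout in convs) \
         + sum(m * n + n for m, n in fcs)
print("Total trainable parameters:", n_params)  # ~399k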
In [73]:
### Train your model here.
### Feel free to use as many code cells as needed.

import tensorflow as tf
x = tf.placeholder(tf.float32, (None, 32, 32, 3), name="x")
y = tf.placeholder(tf.int32, (None), name="y")
oh = tf.one_hot(y, 54)  # NOTE: depth 54 exceeds the 43 actual classes; the extra logits simply stay unused
dropout_probablity = tf.placeholder(tf.float32, (), name="dropout_probablity")

rate = 0.001

logits = tf_TSC_model(x,dropout_probablity)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=oh)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
predicN = tf.equal(tf.argmax(logits, 1), tf.argmax(oh, 1))
operN = tf.reduce_mean(tf.cast(predicN, tf.float32))

def evaluate_accuracy(X_data, y_data, sess):
    n = len(X_data)
    total_accuracy = 0
    for offset in range(0,n, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(operN, feed_dict={x: batch_x, y: batch_y,dropout_probablity:1.0})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / n
Conv0 (?, 32, 32, 6)
Conv1 (?, 14, 14, 6)
Conv2 (?, 10, 10, 12)
Conv3 (?, 8, 8, 24)
Conv4 (?, 6, 6, 48)
Conv5 (?, 4, 4, 96)
Conv6 (?, 2, 2, 182)
Conv7 (?, 1, 1, 182)
Fully connected 0: (?, 182)
Fully connected 1: (?, 150)
Fully connected 2: (?, 118)
Fully connected 3: (?, 54)
In [74]:
EPOCHS = 3
BATCH_SIZE = 20
tf.add_to_collection('I', x)
tf.add_to_collection('I', y)
tf.add_to_collection('I',oh)
tf.add_to_collection('I',dropout_probablity)
tf.add_to_collection('O', logits)
tf.add_to_collection('O', predicN)
tf.add_to_collection('O', operN)

se = time.time()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    n = len(X_train)
    
    print("\nTraining")
    print("Started....")
    
    best_validation_accuracy = 0
    for i in range(EPOCHS):
        se = time.time()
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, n, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            dp = 0.8
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, dropout_probablity:dp})
            
            for sc in range(2):
                rescaled_x, rescaled_y = scaling(batch_x, batch_y, 32 - (sc*2+1))
                sess.run(training_operation, feed_dict={x: rescaled_x, y:rescaled_y, 
                                                        dropout_probablity:dp})
            for t in range(-1, 2):
                translated_x, translated_y = translation(batch_x, batch_y, t*2)
                sess.run(training_operation, feed_dict={x: translated_x, y: translated_y, 
                                                        dropout_probablity: dp})
                
            for t in range(-1, 2):
                translated_x, translated_y = translation(batch_x, batch_y, 0, t*2)
                sess.run(training_operation, feed_dict={x: translated_x, y: translated_y,
                                                       dropout_probablity: dp})
            
            for ro in range(-1, 2):
                rotated_x, rotated_y = rotation(batch_x, batch_y, ro*2)
                sess.run(training_operation, feed_dict={x: rotated_x, y: rotated_y,
                                                       dropout_probablity:dp})
            
        validation_accuracy = evaluate_accuracy(X_validation, y_validation, sess)
        if validation_accuracy > best_validation_accuracy:
            best_validation_accuracy = validation_accuracy
            try:
                saver
            except NameError:
                saver = tf.train.Saver()
            saver.save(sess, ' tf_TSC_model')
            
            print("|tf_TSC_model| model saved")
        
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print("Time for epoch: {}".format(time.time() - se))
        
        
    test_accuracy = evaluate_accuracy(X_test, y_test, sess)
    print("Test Accuracy = {:.3f}".format(test_accuracy))

    
print("Time for all session: {}".format(time.time() - se))
Training
Started....
|tf_TSC_model| model saved
EPOCH 1 ...
Validation Accuracy = 0.959
Time for epoch: 455.64936113357544
|tf_TSC_model| model saved
EPOCH 2 ...
Validation Accuracy = 0.975
Time for epoch: 495.4449870586395
EPOCH 3 ...
Validation Accuracy = 0.972
Time for epoch: 499.6492049694061
Test Accuracy = 0.940
Time for all session: 505.781530380249
In [87]:
# Testing on test data using the saved model 'tf_TSC_model'
with tf.Graph().as_default() as g:
    with tf.Session() as sess:
        loader = tf.train.import_meta_graph('/home/octo/Desktop/ tf_TSC_model.meta')
        x, y, oh,dropout_probablity= tf.get_collection('I')[:4]
        logits,predicN,operN = tf.get_collection('O')[:3]
        loader.restore(sess, tf.train.latest_checkpoint('./'))
        print("Test Accuracy: = {:.3f}".format(evaluate_accuracy(X_test, y_test, sess)))
    
Test Accuracy: = 0.929

Question 4

How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)

Answer:

Optimization:

I used tf.train.AdamOptimizer.

Epochs:

I tried more epochs for this model, but the results did not improve after 3 or 4, so I trained for 3.

Hyperparameters:

Learning rate 0.001 and batch size 20. For the tf.truncated_normal() initializer I used mean 0 and standard deviation 0.1.

Training/validation data:

25% of the training data was used for validation.

Preprocessing:

Translation by +2 or -2 pixels along each axis, rotation by +2 or -2 degrees, and slight rescaling.

Question 5

What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.

Answer:

  1. I trained LeNet with the training and validation data and got roughly 90% accuracy, so I aimed for above 95%.
  2. Accuracy improved after normalization.
  3. I then tried 7 convolutional layers with dropout, max pooling and flattening, followed by the same fully connected layers as in LeNet.
  4. After that I added the image augmentation functions (rotation and translation of images).
  5. Applying dropout with a keep probability of 0.8 helped a lot.
  6. I started with a batch size above 100 but later reduced it, based on the frequency of each class (an oversampling alternative is sketched below).

The 6th step took time to tune the batch size and number of epochs, but together these steps pushed validation accuracy above 97%.
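As an alternative to tuning the batch size around class frequencies, rare classes could be oversampled directly; a minimal sketch using sklearn.utils.resample (imported above but not used in this notebook; the variable names are hypothetical):

# Hypothetical oversampling of one rare class, e.g. class 0: Speed limit (20km/h)
mask = (y_train == 0)
X_extra, y_extra = resample(X_train[mask], y_train[mask],
                            n_samples=500, random_state=42)
X_balanced = np.concatenate([X_train, X_extra])
y_balanced = np.concatenate([y_train, y_extra])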

References used other than the class material:

https://github.com/rndbrtrnd/udacity-deep-learning

https://github.com/udacity/CarND-LeNet-Lab/blob/master/LeNet-Lab-Solution.ipynb

https://arxiv.org/abs/1606.02228

http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/#second-model-convolutions


Step 3: Test a Model on New Images

Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Implementation

Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

In [ ]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
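If pictures are downloaded from the web, they can be loaded and resized to the 32x32 input shape with PIL (imported above); a minimal sketch with hypothetical file names:

web_files = ['web_sign_1.jpg', 'web_sign_2.jpg']  # hypothetical local files
web_images = np.array([np.array(Image.open(f).resize((32, 32))) for f in web_files])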
In [90]:
signnames_sorted
Out[90]:
ClassId SignName Occurrence
0 2 Speed limit (50km/h) 2250
1 1 Speed limit (30km/h) 2220
2 13 Yield 2160
3 12 Priority road 2100
4 38 Keep right 2070
5 10 No passing for vehicles over 3.5 metric tons 2010
6 4 Speed limit (70km/h) 1980
7 5 Speed limit (80km/h) 1860
8 25 Road work 1500
9 9 No passing 1470
10 7 Speed limit (100km/h) 1440
11 3 Speed limit (60km/h) 1410
12 8 Speed limit (120km/h) 1410
13 11 Right-of-way at the next intersection 1320
14 35 Ahead only 1200
15 18 General caution 1200
16 17 No entry 1110
17 31 Wild animals crossing 780
18 14 Stop 780
19 33 Turn right ahead 689
20 15 No vehicles 630
21 26 Traffic signals 600
22 28 Children crossing 540
23 23 Slippery road 510
24 30 Beware of ice/snow 450
25 16 Vehicles over 3.5 metric tons prohibited 420
26 34 Turn left ahead 420
27 6 End of speed limit (80km/h) 420
28 36 Go straight or right 390
29 22 Bumpy road 390
30 40 Roundabout mandatory 360
31 20 Dangerous curve to the right 360
32 21 Double curve 330
33 39 Keep left 300
34 29 Bicycles crossing 270
35 24 Road narrows on the right 270
36 41 End of no passing 240
37 42 End of no passing by vehicles over 3.5 metric ... 240
38 32 End of all speed and passing limits 240
39 27 Pedestrians 240
40 37 Go straight or left 210
41 19 Dangerous curve to the left 210
42 0 Speed limit (20km/h) 210
In [88]:
# Show one of the images
i = 3897
image = X_train[i]
plt.title(sign_class_name(y_train[i]))
plt.imshow(image)
Out[88]:
<matplotlib.image.AxesImage at 0x7f7404211da0>
In [89]:
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

index = random.randint(0, len(X_train))
image = X_train[index].squeeze()

#plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index])
#plt.imsave("spped30.jpg",train['features'][45])
42
In [91]:
# Pick 30 random images
sample_indexes = random.sample(range(len(X_train)), 30)
sample_images = [X_train[i] for i in sample_indexes]
sample_labels = [y_train[i] for i in sample_indexes]
In [92]:
print(sample_labels)
[11, 1, 35, 25, 18, 29, 36, 5, 23, 9, 10, 38, 8, 2, 25, 31, 32, 35, 15, 29, 25, 2, 25, 18, 4, 9, 42, 4, 25, 10]

Question 6

Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.

Answer:

In [93]:
plt.title(sign_class_name(sample_labels[20]))
plt.imshow(sample_images[20])
Out[93]:
<matplotlib.image.AxesImage at 0x7f74043d9e80>
In [104]:
### Run the predictions here.
### Feel free to use as many code cells as needed.
with tf.Graph().as_default() as g:
    with tf.Session() as sess:
        loader = tf.train.import_meta_graph('/home/octo/Desktop/ tf_TSC_model.meta')
        x, y, oh,dropout_probablity= tf.get_collection('I')[:4]
        logits,predicN,operN = tf.get_collection('O')[:3]
        loader.restore(sess, tf.train.latest_checkpoint('./'))
        best5 = tf.nn.top_k(tf.nn.softmax(logits), 5)
        output = sess.run(best5, feed_dict={x:sample_images, y:sample_labels,dropout_probablity: 1.0})
        pred_best_5 = output.values
        best5_indices = output.indices
        
        
        for i in range(len(pred_best_5)):
            print('Example {i} -> {actual}'.format(i=i, actual=sign_class_name(sample_labels[i])))
            for j in range(5):
                traffic_sign = sign_class_name(best5_indices[i][j])
                probability = pred_best_5[i][j]
                print("#>{traffic_sign} -> {probability}".format(traffic_sign=traffic_sign, 
                                                                     probability=probability))
        print("Test Accuracy: = {:.3f}".format(evaluate_accuracy(X_test, y_test, sess)))
 
Example 0 -> Right-of-way at the next intersection
#>Right-of-way at the next intersection -> 1.0
#>Pedestrians -> 4.982272143680916e-27
#>Double curve -> 6.900536573925874e-29
#>Beware of ice/snow -> 4.396342406856426e-32
#>Priority road -> 2.7030169336212025e-32
Example 1 -> Speed limit (30km/h)
#>Speed limit (30km/h) -> 0.9974532723426819
#>Speed limit (20km/h) -> 0.0025466401129961014
#>Speed limit (70km/h) -> 1.1156661372524468e-07
#>Speed limit (100km/h) -> 3.019326300091052e-08
#>Speed limit (80km/h) -> 8.422746528014002e-11
Example 2 -> Ahead only
#>Ahead only -> 1.0
#>Turn left ahead -> 9.943532108813713e-12
#>Speed limit (60km/h) -> 3.1294465620777273e-18
#>No passing -> 4.107832236313754e-22
#>Priority road -> 3.28191380120908e-23
Example 3 -> Road work
#>Road work -> 1.0
#>Priority road -> 8.2367403116601165e-22
#>Dangerous curve to the right -> 3.6083722116933983e-22
#>Keep right -> 2.37479081338186e-24
#>Beware of ice/snow -> 4.859903331320813e-26
Example 4 -> General caution
#>General caution -> 1.0
#>Traffic signals -> 1.5349158610789004e-09
#>Pedestrians -> 3.121369829273135e-10
#>Road narrows on the right -> 2.144489004979994e-15
#>Right-of-way at the next intersection -> 2.555588364653784e-29
Example 5 -> Bicycles crossing
#>Bicycles crossing -> 0.990509033203125
#>Slippery road -> 0.008604883216321468
#>Wild animals crossing -> 0.00038359311292879283
#>Road work -> 0.00024941936135292053
#>Double curve -> 0.00013588971341960132
Example 6 -> Go straight or right
#>Go straight or right -> 1.0
#>Priority road -> 2.7161480258951903e-14
#>No passing for vehicles over 3.5 metric tons -> 2.5439990825742997e-14
#>Vehicles over 3.5 metric tons prohibited -> 1.6978329888382195e-14
#>Roundabout mandatory -> 1.1071440598618872e-15
Example 7 -> Speed limit (80km/h)
#>Speed limit (80km/h) -> 0.999930739402771
#>Speed limit (60km/h) -> 4.549733785097487e-05
#>Speed limit (50km/h) -> 1.4909220226400066e-05
#>Speed limit (100km/h) -> 8.24717790237628e-06
#>Speed limit (30km/h) -> 6.207940259628231e-07
Example 8 -> Slippery road
#>Slippery road -> 1.0
#>Dangerous curve to the left -> 5.946181030604082e-15
#>Right-of-way at the next intersection -> 2.6246452416836144e-17
#>Wild animals crossing -> 1.2092593441473632e-19
#>Double curve -> 2.801140838777418e-20
Example 9 -> No passing
#>No passing -> 1.0
#>No passing for vehicles over 3.5 metric tons -> 2.4039044917796484e-15
#>Priority road -> 7.577128112644507e-24
#>Vehicles over 3.5 metric tons prohibited -> 7.537970240600694e-26
#>Roundabout mandatory -> 3.4312337074323884e-26
Example 10 -> No passing for vehicles over 3.5 metric tons
#>No passing for vehicles over 3.5 metric tons -> 0.9999947547912598
#>Speed limit (80km/h) -> 2.696377805477823e-06
#>No passing -> 2.4791893338260707e-06
#>Speed limit (60km/h) -> 1.2440344221431587e-07
#>Dangerous curve to the left -> 4.2585495130254e-09
Example 11 -> Keep right
#>Keep right -> 1.0
#>Speed limit (80km/h) -> 5.1506204094349634e-17
#>Go straight or right -> 6.288392452750627e-18
#>Speed limit (30km/h) -> 3.3975523422184147e-18
#>Priority road -> 1.9384607491856086e-18
Example 12 -> Speed limit (120km/h)
#>Speed limit (120km/h) -> 0.9999953508377075
#>Speed limit (70km/h) -> 2.7649987259792397e-06
#>Speed limit (100km/h) -> 1.8147750324715162e-06
#>Speed limit (30km/h) -> 5.195236241206658e-08
#>Speed limit (20km/h) -> 7.592144968260328e-11
Example 13 -> Speed limit (50km/h)
#>Speed limit (50km/h) -> 0.9987867474555969
#>Speed limit (30km/h) -> 0.0011023340048268437
#>Speed limit (70km/h) -> 9.03235049918294e-05
#>Speed limit (80km/h) -> 1.5633740986231714e-05
#>Speed limit (120km/h) -> 4.923148026136914e-06
Example 14 -> Road work
#>Road work -> 1.0
#>Dangerous curve to the right -> 5.257674449718636e-34
#>Priority road -> 1.2217627598289958e-37
#>Speed limit (20km/h) -> 0.0
#>Speed limit (30km/h) -> 0.0
Example 15 -> Wild animals crossing
#>Wild animals crossing -> 0.9999926090240479
#>Double curve -> 7.3959868132078554e-06
#>Slippery road -> 2.490872905071967e-13
#>Speed limit (50km/h) -> 1.1594643702182023e-13
#>Road work -> 1.948079725148672e-14
Example 16 -> End of all speed and passing limits
#>End of all speed and passing limits -> 0.969563364982605
#>End of speed limit (80km/h) -> 0.014796677976846695
#>End of no passing -> 0.010453319177031517
#>Dangerous curve to the right -> 0.004190375097095966
#>Beware of ice/snow -> 0.0004925180110149086
Example 17 -> Ahead only
#>Ahead only -> 1.0
#>Turn left ahead -> 8.552246028871967e-16
#>Speed limit (60km/h) -> 2.1588902763859204e-26
#>No passing -> 3.451266475035606e-33
#>Priority road -> 4.8637786905500684e-34
Example 18 -> No vehicles
#>No vehicles -> 0.9999233484268188
#>Speed limit (70km/h) -> 7.575912604806945e-05
#>Speed limit (50km/h) -> 8.2795327216445e-07
#>Speed limit (30km/h) -> 3.789654101638007e-08
#>Speed limit (120km/h) -> 9.183338534057839e-09
Example 19 -> Bicycles crossing
#>Bicycles crossing -> 0.9962336421012878
#>Slippery road -> 0.003743301145732403
#>Wild animals crossing -> 1.8601702322484925e-05
#>Double curve -> 2.9922782687208382e-06
#>Children crossing -> 6.675064128103259e-07
Example 20 -> Road work
#>Road work -> 1.0
#>Dangerous curve to the right -> 5.35036157540484e-26
#>Priority road -> 3.870234030815904e-27
#>Keep right -> 4.179930017738685e-29
#>Beware of ice/snow -> 4.5370393828214593e-32
Example 21 -> Speed limit (50km/h)
#>Speed limit (50km/h) -> 0.9999456405639648
#>Speed limit (30km/h) -> 5.3575415222439915e-05
#>Speed limit (80km/h) -> 4.905589321424486e-07
#>Speed limit (70km/h) -> 2.9528465006478655e-07
#>Speed limit (100km/h) -> 1.8714731153668396e-10
Example 22 -> Road work
#>Road work -> 1.0
#>Dangerous curve to the right -> 1.0362465757626731e-25
#>Priority road -> 3.2612773231880146e-27
#>Keep right -> 7.100519481931833e-28
#>Speed limit (50km/h) -> 1.5039110155211231e-31
Example 23 -> General caution
#>General caution -> 0.9832806587219238
#>Traffic signals -> 0.014077975414693356
#>Pedestrians -> 0.0022418282460421324
#>Road narrows on the right -> 0.00039909547194838524
#>Right-of-way at the next intersection -> 2.3372582802494435e-07
Example 24 -> Speed limit (70km/h)
#>Speed limit (70km/h) -> 0.999997615814209
#>Speed limit (30km/h) -> 2.4392520572291687e-06
#>Speed limit (20km/h) -> 6.442137490125788e-09
#>Speed limit (120km/h) -> 6.784305894846909e-10
#>Speed limit (50km/h) -> 4.514807416811095e-10
Example 25 -> No passing
#>No passing -> 1.0
#>No passing for vehicles over 3.5 metric tons -> 4.629058096040917e-13
#>Vehicles over 3.5 metric tons prohibited -> 1.20130858749708e-21
#>Slippery road -> 3.3080595546162354e-22
#>Stop -> 5.892056374726016e-23
Example 26 -> End of no passing by vehicles over 3.5 metric tons
#>End of no passing by vehicles over 3.5 metric tons -> 0.9999936819076538
#>End of speed limit (80km/h) -> 6.0588590713450685e-06
#>Speed limit (100km/h) -> 8.070087886835609e-08
#>End of no passing -> 7.581045480264947e-08
#>Speed limit (60km/h) -> 7.009221292264556e-08
Example 27 -> Speed limit (70km/h)
#>Speed limit (70km/h) -> 1.0
#>Speed limit (30km/h) -> 9.463883534266068e-14
#>Speed limit (20km/h) -> 4.804214914506834e-20
#>Speed limit (50km/h) -> 6.1468849528388186e-21
#>Speed limit (120km/h) -> 2.5745314989890875e-22
Example 28 -> Road work
#>Road work -> 1.0
#>Priority road -> 2.105622121106876e-09
#>Dangerous curve to the right -> 9.189080052429688e-11
#>Beware of ice/snow -> 3.375518961568069e-11
#>Keep right -> 1.8911736759941178e-11
Example 29 -> No passing for vehicles over 3.5 metric tons
#>No passing for vehicles over 3.5 metric tons -> 1.0
#>No passing -> 2.1571376209264005e-14
#>Dangerous curve to the left -> 4.872089043377005e-15
#>Speed limit (80km/h) -> 2.986359709937253e-15
#>Speed limit (60km/h) -> 7.498933539106535e-19
Test Accuracy: = 0.929

Question 7

Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.

NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, look up the name in signnames.csv and see if it matches the sign from the image.

Answer:

In [ ]:
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.

Manual checking using signnames.csv

The model predicted 30 out of 30 signs correctly, i.e. 100% accuracy (a direct computation is sketched after the list below).

Example 0 -> Right-of-way at the next intersection

>Right-of-way at the next intersection -> 1.0

Example 1 -> Speed limit (30km/h)

>Speed limit (30km/h) -> 0.9974532723426819

Example 2 -> Ahead only

>Ahead only -> 1.0

Example 3 -> Road work

>Road work -> 1.0

Example 4 -> General caution

>General caution -> 1.0

Example 5 -> Bicycles crossing

>Bicycles crossing -> 0.990509033203125

Example 6 -> Go straight or right

>Go straight or right -> 1.0

Example 7 -> Speed limit (80km/h)

>Speed limit (80km/h) -> 0.999930739402771

Example 8 -> Slippery road

>Slippery road -> 1.0

Example 9 -> No passing

>No passing -> 1.0

Example 10 -> No passing for vehicles over 3.5 metric tons

>No passing for vehicles over 3.5 metric tons -> 0.9999947547912598

Example 11 -> Keep right

>Keep right -> 1.0

Example 12 -> Speed limit (120km/h)

>Speed limit (120km/h) -> 0.9999953508377075

Example 13 -> Speed limit (50km/h)

>Speed limit (50km/h) -> 0.9987867474555969

Example 14 -> Road work

>Road work -> 1.0

Example 15 -> Wild animals crossing

>Wild animals crossing -> 0.9999926090240479

Example 16 -> End of all speed and passing limits

>End of all speed and passing limits -> 0.969563364982605

Example 17 -> Ahead only

>Ahead only -> 1.0

Example 18 -> No vehicles

>No vehicles -> 0.9999233484268188

Example 19 -> Bicycles crossing

>Bicycles crossing -> 0.9962336421012878

Example 20 -> Road work

>Road work -> 1.0

Example 21 -> Speed limit (50km/h)

>Speed limit (50km/h) -> 0.9999456405639648

Example 22 -> Road work

>Road work -> 1.0

Example 23 -> General caution

>General caution -> 0.9832806587219238

Example 24 -> Speed limit (70km/h)

>Speed limit (70km/h) -> 0.999997615814209

Example 25 -> No passing

>No passing -> 1.0

Example 26 -> End of no passing by vehicles over 3.5 metric tons

>End of no passing by vehicles over 3.5 metric tons -> 0.9999936819076538

Example 27 -> Speed limit (70km/h)

>Speed limit (70km/h) -> 1.0

Example 28 -> Road work

>Road work -> 1.0

Example 29 -> No passing for vehicles over 3.5 metric tons

>No passing for vehicles over 3.5 metric tons -> 1.0
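
Rather than checking each prediction by hand, the top-1 accuracy over the 30 samples can also be computed directly; a minimal sketch assuming best5_indices and sample_labels from the prediction cell above:

top1 = best5_indices[:, 0]  # highest-probability class per sample
accuracy = np.mean(top1 == np.array(sample_labels))
print("Sample accuracy = {:.1%}".format(accuracy))  # 30/30 correct -> 100.0%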

Question 8

Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Answer:

tf.nn.top_k(tf.nn.softmax(logits), 5)

The softmax of the logits is passed to tf.nn.top_k() to obtain the top 5 candidates for each image.
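For illustration, here is a minimal standalone sketch of tf.nn.top_k (matching the TF API used in the rest of this notebook; the constant logits are made up):

with tf.Session() as sess:
    probs = tf.nn.softmax(tf.constant([[2.0, 1.0, 0.1, 3.0, 0.5]]))
    values, indices = sess.run(tf.nn.top_k(probs, k=3))
print(indices)  # [[3 0 1]] -- class ids of the 3 largest probabilities
print(values)   # the corresponding softmax probabilities, in descending order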

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.